Search Results
Crazy FAST RAG | Ollama | Nomic Embedding Model | Groq API
Let's use Ollama's Embeddings to Build an App
Don’t Embed Wrong!
Building a RAG application using open-source models (Asking questions from a PDF using Llama2)
Superfast RAG with Llama 3 and Groq
Run ALL Your AI Locally in Minutes (LLMs, RAG, and more)
EASIEST Way to Fine-Tune a LLM and Use It With Ollama
Chat with Documents is Now Crazy Fast thanks to Groq API and Streamlit
$0 Embeddings (OpenAI vs. free & open source)
Real time RAG App using Llama 3.2 and Open Source Stack on CPU
This new AI is powerful and uncensored… Let’s run it
Fine Tune LLaMA 2 In FIVE MINUTES! - "Perform 10x Better For My Use Case"
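
Most of the results above walk through the same basic pipeline: embed documents locally with an Ollama-served embedding model such as nomic-embed-text, retrieve the closest chunk for a question, and generate the answer with a Llama model behind the Groq API. A minimal sketch of that flow follows; the ollama and groq Python packages, the nomic-embed-text and llama3-8b-8192 model ids, the sample documents, and the GROQ_API_KEY environment variable are all assumptions for illustration, not taken from any one of the videos.

# Minimal RAG sketch: local Ollama embeddings + Groq-hosted Llama for generation.
import os

import numpy as np
import ollama
from groq import Groq

# Tiny in-memory "corpus" standing in for chunked PDF text (illustrative only).
DOCS = [
    "Ollama serves open-source models such as Llama 3 on local hardware.",
    "Groq's API offers very low-latency inference for Llama models.",
    "nomic-embed-text is an open embedding model that runs via Ollama.",
]

def embed(text: str) -> np.ndarray:
    # Ollama's embeddings endpoint returns a response with an "embedding" list.
    resp = ollama.embeddings(model="nomic-embed-text", prompt=text)
    return np.array(resp["embedding"])

def retrieve(question: str, doc_vectors: list[np.ndarray]) -> str:
    # Pick the document whose embedding has the highest cosine similarity to the question.
    q = embed(question)
    sims = [
        float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d)))
        for d in doc_vectors
    ]
    return DOCS[int(np.argmax(sims))]

def answer(question: str) -> str:
    doc_vectors = [embed(d) for d in DOCS]
    context = retrieve(question, doc_vectors)
    client = Groq(api_key=os.environ["GROQ_API_KEY"])
    chat = client.chat.completions.create(
        model="llama3-8b-8192",  # assumed Groq model id
        messages=[
            {
                "role": "user",
                "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}",
            },
        ],
    )
    return chat.choices[0].message.content

if __name__ == "__main__":
    print(answer("How can I run Llama 3 locally?"))

In a real application the corpus would come from chunking a PDF and the embeddings would live in a vector store rather than being recomputed per query, but the embed / retrieve / generate split above is the structure the listed tutorials share.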